                          ===========
                            README
                          ===========

              QLogic FCoE Boot Software Initiator
             Copyright (c) 2015 QLogic Corporation
                     All rights reserved.


1. INTRODUCTION
===============

This file describes the QLogic implementation of FCoE Boot software
initiator.

The FCoE ROM driver and its menu-driven FCoE configuration program are stored
in the device's NVRAM.  The FCoE ROM driver image is loaded into host memory
by the Multiple Boot Agent (MBA); therefore, the FCoE ROM solution requires
both the FCoE ROM and MBA ROM images.

During a network boot, the MBA ROM loads the UNDI and FCoE ROM drivers into
host conventional memory and forms an FCoE networking stack.

The resulting FCoE stack is shown below:

INT 13H system handler
----------------------
FCoE Boot ROM
----------------------
UNDI Driver (which is part of MBA ROM)
----------------------

The FCoE boot software hooks the system INT 13H handler so that it can
transparently redirect disk access requests from a real-mode operating system
to a remote disk via the FCoE protocol.

Since DOS is purely a real-mode operating system, it continues to use INT 13H
calls to access disks after the operating system is loaded. For instance, if a
DOS application is executed or files are copied from the FCoE drives, DOS will
invoke a series of INT 13H calls to access the FCoE disks.  For protected-mode
operating systems (Windows, Linux), the FCoE ROM is no longer used once the OS
switches to protected mode.

2. DRIVER COMPONENTS
=====================
The following components are released:

release.txt  : documents the revision history of the FCoE driver and the FCoE
               configuration program.

readme.txt   : describes overall FCoE functionality and the installation
               procedure.

fcbvMMnn.bb  : compressed FCoE ROM image.

fcoe_setup.sh: setup script used to prepare an INITRD containing the required
               Linux network and FCoE drivers.


3. FCoE PARAMETERS
===================

3.1 Storage of Configuration in NVRAM

All FCoE parameters are stored in the device's NVRAM.  Users can configure
FCoE parameters during POST by pressing Ctrl-D.  Users can also configure
these parameters with CCM under Windows/DOS/Linux.

4. INSTALLATION
================

Several services must be configured and steps taken to set up an FCoE
network.

4.1. Setup Initiator 

Users can program one or more FCoE components on the device's NVRAM by using 
EDIAG.

    * To program the MBA on the device (if the MBA is not part of the system
      BIOS):
            nvm upgrade -F -mba a:/evpxe.nic

    * To program the FCoE initiator and the FCoE configuration program:
            nvm upgrade -F -feb a:/fcbvMMnn.bb

4.2. SETUP TARGET DISK

4.2.1 Linux partition

Creating a bootable Linux disk image on the FCoE target involves three major
steps:
   * Create a disk image from a clean install.
   * Customize the initial RAM disk.
   * Customize the boot loader to load the customized initrd.img.

4.2.1.1 Creation of Disk Image from a Clean Install

  - Install Red Hat Enterprise Linux 6 (or greater) or SuSE SLES 11 SP1 (or
    greater) on a system with a hardware configuration identical to that of
    the FCoE clients.

  - Configure the network interface so that it is not shut down during system
    shutdown or reboot.

        RedHat: chkconfig --level 0123456 network on
                chkconfig --level 0123456 NetworkManager on

                                  -OR-

        SuSE  : Use Yast configuration tool to change Network runlevel services

  - Install QLogic 577xx/578xx software package, which contains FCoE offload support.

  - Access FCoE target.

  - Copy the entire disk from the local drive to the FCoE target drive
    sector by sector.

         dd if=/local_disk of=/target_disk bs=1024k
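The dd copy can be rehearsed safely before pointing it at real block devices. The sketch below uses throwaway image files in /tmp (paths and sizes are examples, not values from this release) to exercise the same command and verify that the copy is byte-for-byte identical:

```shell
# Create a small throwaway "local disk" image (4 MB of zeros).
dd if=/dev/zero of=/tmp/local_disk.img bs=1024k count=4 2>/dev/null
# Copy it the same way the real disk would be copied.
dd if=/tmp/local_disk.img of=/tmp/target_disk.img bs=1024k 2>/dev/null
# Verify the copy is byte-for-byte identical.
cmp /tmp/local_disk.img /tmp/target_disk.img && echo "copy verified"
```

Once the procedure is rehearsed, substitute the actual local and target block devices for the image files.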
                      
4.2.1.2 Customizing Initial RAM Disk (INITRD)
The fcoe_setup.sh script, part of this release, will build and customize an
INITRD image (fcoe-initrd.img) with the necessary support modules to initialize
a network interface and create an FCoE connection.

  - Execute the fcoe_setup.sh script.  It may be necessary to run dos2unix
    on this script to ensure that it is formatted properly.

        ./fcoe_setup.sh

Note:  The fcoe_setup.sh script will determine which network driver(s) to 
include in the image.  The script will also create a temporary directory 
(/mnt/fcoe) when creating the new image.  If such a directory already exists, 
please back up the original contents or rename it, as the script will remove 
this directory when it has completed.

4.2.1.3 Customizing Boot Loader

First, copy the customized initrd to the boot partition on the FCoE target
disk.  Assume that the boot partition of the FCoE target is /dev/sda1.

   - mkdir /mnt/myboot
   - mount /dev/sda1 /mnt/myboot
   - cp fcoe-initrd.img /mnt/myboot
   
Next, edit the boot loader configuration file to load the customized initrd.
Use a text editor to modify the following line in /mnt/myboot/grub/menu.lst:

     initrd /fcoe-initrd.img
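For context, the initrd line sits inside a GRUB boot stanza; a sketch of such a stanza is shown below (the title, kernel path, and root device are placeholders, not values from this release):

```
title Linux (FCoE boot)
    root (hd0,0)
    kernel /vmlinuz root=/dev/sda2
    initrd /fcoe-initrd.img
```

Only the initrd line needs to change; leave the other lines as installed by the distribution.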

4.2.1.4 Multipath

If a multipath environment is desired, you will need to make modifications to
the boot loader configuration file (from the previous section) as well as the 
/etc/fstab file on the FCoE target disk.  It will be necessary to modify the 
device name in each file to point to the FCoE target disk alias, as designated 
by the device-mapper.  Using the 'by-id' or 'by-uuid' method is preferred
since it lets the user specify the unique target disk rather than relying on
LUN ordering (sdX) to identify the boot disk.  These modifications can take
place before or after imaging the disk as described in section 4.2.1.1.  If
making these modifications before imaging, it is recommended to make a backup
of each modified file so that it can be restored to its original state.
Failure to restore these files could cause problems booting the original
local hard disk on subsequent boots.
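As a sketch, an /etc/fstab entry using the 'by-id' method might look like the following (the disk identifier shown is a placeholder for the actual persistent alias of the FCoE target disk, and the partition layout and filesystem type are examples only):

```
/dev/disk/by-id/<fcoe-target-disk-id>-part2  /      ext3  defaults  1 1
/dev/disk/by-id/<fcoe-target-disk-id>-part1  /boot  ext3  defaults  1 2
```

The corresponding root= kernel argument in the boot loader configuration file would reference the same by-id alias.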



5. LIMITATIONS
==============

5.1 Upgrading drivers on a Linux FCoE Booted system

When attempting to upgrade the QLogic 577xx/578xx package drivers on a system
which has been booted via FCoE Boot, it will be necessary to use the tarball
method ONLY. This is necessary since the RPM uninstall procedure will attempt
to stop the bnx2fcd daemon and subsequently kill the existing FCoE connection.
After installing the updated driver package, it will be necessary to run the
fcoe_setup.sh script to create a new INITRD with the updated drivers.

5.2 Stopping/Restarting network services on a Linux FCoE Booted system

Attempting to run any operation that will cause the system network services to
stop or restart will cause the loss of the existing FCoE boot connection.  This
will render the system inoperable since there is no longer a connection to the 
SAN target.  An example of such an operation would be using the YAST system tool
on SuSE to update network device configurations.  YAST will issue a network
services restart to update its network configuration, and subsequently kill the
FCoE connection.

5.3 Network interface naming convention may be modified on a Linux FCoE Booted
    system

As part of the modified FCoE-boot-capable INITRD, the QLogic network drivers
are loaded at an earlier stage than other network driver modules.  This has
the effect of causing the network ports associated with the FCoE boot
initiator ports to be named eth0, eth1, ..., ethX, since the system is
unaware of any other network drivers at that point.  If the original disk
image has other network devices (e.g., LOMs) that were named eth0, eth1, ...,
ethX, those devices will be renamed.  To work around this potential issue, it
is recommended that the user modify the network configuration so that the
FCoE initiator ports are enumerated/associated with the first ethX interfaces
prior to imaging the system to the SAN disk.  Another option is to remove any
persistent bindings that network devices may have to a particular ethX
interface prior to imaging the system to the SAN disk.
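On distributions of this era (RHEL 6, SLES 11), such persistent bindings are typically recorded in /etc/udev/rules.d/70-persistent-net.rules.  A sketch of one such binding is shown below (the MAC address is a made-up example):

```
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:10:18:aa:bb:cc", KERNEL=="eth*", NAME="eth0"
```

Deleting this file (or the relevant lines in it) prior to imaging clears the bindings; udev regenerates them on the next boot based on the devices present.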

